6 research outputs found

    Improving Online Education Using Big Data Technologies

    In a world undergoing full digital transformation, where new information and communication technologies are constantly evolving, the central challenge for Computing Environments for Human Learning (CEHL) is to find the right way to integrate and harness the power of these technologies. These environments face many challenges: the increased demand for learning, the rapid growth in the number of learners, the heterogeneity of available resources, and the complexity of intensive processing and real-time analysis of the data produced by e-learning systems, which exceeds the limits of traditional infrastructures and relational database management systems. This chapter presents a number of solutions dedicated to CEHL built around two major paradigms, namely cloud computing and Big Data. The first part of this work presents an approach for integrating both the emerging technologies of the Big Data ecosystem and the on-demand services of the cloud into the e-learning field. It aims to enrich and enhance the quality of e-learning platforms by relying on cloud services accessible via the Internet, and it introduces the distributed storage and parallel computing of Big Data in order to provide robust solutions to the requirements of intensive processing, predictive analysis, and massive storage of learning data. A methodology describing this integration process is presented and applied. The chapter then addresses the deployment of a distributed e-learning architecture that combines several recent Big Data tools and is based on a strategy of data decentralization and the parallelization of processing across a cluster of nodes. Finally, the chapter develops a Big Data solution for online learning platforms based on the Moodle LMS: a course recommendation system, designed and implemented with machine learning techniques, that helps learners select the most relevant learning resources according to their interests through the analysis of learning traces. The system is realized using learning data collected from the ESTenLigne platform and the Spark framework deployed on a Hadoop infrastructure.
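
    The chapter's full Spark-on-Hadoop pipeline is not reproduced here, but a minimal PySpark sketch can illustrate the kind of learning-trace processing it describes: reading Moodle activity logs from HDFS and aggregating them per learner and per course. The HDFS paths, the exported log table, and the column names below are assumptions made for illustration, not the chapter's actual data.

        # Hedged sketch, not the chapter's actual pipeline: load Moodle activity
        # traces exported to HDFS and aggregate them per learner and per course.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = (SparkSession.builder
                 .appName("moodle-trace-analysis")
                 .getOrCreate())

        # Hypothetical CSV export of Moodle's standard log table stored on HDFS.
        logs = (spark.read
                .option("header", True)
                .option("inferSchema", True)
                .csv("hdfs:///estenligne/logs/standard_log.csv"))

        # Turn raw traces into per-learner, per-course activity counts, a typical
        # first step before feeding a recommender or a predictive model.
        activity = (logs.groupBy("userid", "courseid")
                    .agg(F.count("*").alias("events"),
                         F.countDistinct("eventname").alias("distinct_actions")))

        activity.write.mode("overwrite").parquet("hdfs:///estenligne/features/activity")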

    Large-scale e-learning recommender system based on Spark and Hadoop

    The present work is part of the ESTenLigne project, the result of several years of experience in developing e-learning at Sidi Mohamed Ben Abdellah University through the implementation of an open, online, and adaptive learning environment. This platform faces many challenges, such as the increasing amount of data, the diversity of pedagogical resources, and the large number of learners, which make it harder to find what learners are really looking for. Furthermore, most of the students on this platform are recent graduates who have just entered higher education and who need a system that helps them choose courses suited to the requirements and needs of each learner. In this article, we develop a distributed course recommender system for the e-learning platform. It aims to discover relationships between students’ activities using the association rules method in order to help students choose the most appropriate learning materials, focusing on the analysis of historical course enrollment and log data. The article discusses in particular the frequent itemsets concept used to determine interesting rules in the transaction database. We then use the extracted rules to build a catalog of the most suitable courses according to each learner’s behavior and preferences. Next, we deploy our recommender system using Big Data technologies and techniques; in particular, we implement the parallel FP-Growth algorithm provided by the Spark framework and the Hadoop ecosystem. The experimental results show the effectiveness and scalability of the proposed system. Finally, we evaluate the performance of the Spark MLlib library compared to traditional machine learning tools, including Weka and R.
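
    As a rough illustration of this approach, the sketch below mines association rules from course-enrollment baskets using the parallel FP-Growth implementation that Spark MLlib provides (pyspark.ml.fpm.FPGrowth), the algorithm named in the abstract; the toy enrollment data and the support and confidence thresholds are assumptions, not the values used in the article.

        from pyspark.sql import SparkSession
        from pyspark.ml.fpm import FPGrowth

        spark = SparkSession.builder.appName("course-recommender").getOrCreate()

        # Toy data: each row is one learner's basket of enrolled course identifiers.
        enrollments = spark.createDataFrame(
            [(1, ["algebra1", "python_intro", "statistics"]),
             (2, ["python_intro", "databases"]),
             (3, ["algebra1", "statistics"])],
            ["student_id", "items"])

        # Assumed thresholds; the article's minimum support/confidence may differ.
        fp = FPGrowth(itemsCol="items", minSupport=0.1, minConfidence=0.5)
        model = fp.fit(enrollments)

        model.freqItemsets.show()      # frequent course sets found in the transactions
        model.associationRules.show()  # rules of the form antecedent => consequent

        # transform() appends, for each learner, the consequents of every rule whose
        # antecedent is contained in that learner's basket: the recommended courses.
        model.transform(enrollments).show(truncate=False)

    Submitting the same script with spark-submit to a YARN-managed Hadoop cluster distributes the FP-Growth computation across nodes, which is the scalability property the abstract reports on.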

    A low-cost toolbox for high-resolution vulnerability and hazard-perception mapping in view of tsunami risk mitigation: application to New Caledonia

    The drive towards improving tsunami risk mitigation has intensified along many populated coastlines. Like many islands in the Pacific Ocean, the coastal population of New Caledonia is exposed to tsunamis triggered by powerful earthquakes. Intersecting exhaustive population data with high-resolution building location data within a user-defined coastal fringe is an accurate means of geolocating vulnerable residents, and an important step towards disaster risk reduction. This paper presents a mixed methodology built on GIS-based dasymetric techniques for assessing, classifying, and mapping population distribution in New Caledonia, with the aim of quantifying and ranking the areas most vulnerable to tsunami-related hazards. Results reveal that 33% of the population, inclusive of previously unmapped precarious housing, lives between sea level and the 10 m elevation contour in well-defined clusters. A pilot field survey of 412 respondents was additionally conducted in the capital Nouméa (66% of the nation’s population) to assess tsunami awareness, risk perception, and risk-related behavioral patterns among the ethnically and demographically diverse population. By further mapping the spatial association between coastal population concentrations, the perceived natural shielding capacities of coral reefs and mangroves, and the benefits of alarm siren networks, the study delivers a comprehensive assessment of the country’s disaster preparedness, with policy recommendations for the future. The methodology is transferable to other types of hazards and other insular settings where civil security and risk-management organizations acquire and curate reliable primary data but may also need guidelines for transforming them into serviceable disaster risk reduction methods and policies.
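
    The intersection step described above can be pictured with a short GeoPandas sketch that keeps only the buildings falling inside a user-defined coastal fringe and redistributes census population counts evenly over the buildings of each census unit. The file names, attribute names, and projected CRS below are assumptions for illustration, not the study's actual datasets or dasymetric workflow.

        import geopandas as gpd

        # Hypothetical inputs: building footprints, census units with a "population"
        # attribute, and a coastal-fringe polygon (land below the 10 m contour)
        # derived from a DEM. EPSG:3163 is an assumed projected CRS for New Caledonia.
        buildings = gpd.read_file("buildings.gpkg").to_crs(epsg=3163)
        census = gpd.read_file("census_units.gpkg").to_crs(epsg=3163)
        fringe = gpd.read_file("coastal_fringe_10m.gpkg").to_crs(epsg=3163)

        # Simple dasymetric redistribution: split each census unit's population
        # evenly over the buildings located within it.
        joined = gpd.sjoin(buildings, census, predicate="within", how="inner")
        joined["pop_share"] = joined.groupby("index_right")["population"].transform(
            lambda p: p / len(p))

        # Geolocate exposed residents: keep buildings inside the coastal fringe
        # and sum their population shares.
        exposed = gpd.sjoin(joined.drop(columns="index_right"), fringe,
                            predicate="within", how="inner")
        print(f"Estimated population in the coastal fringe: {exposed['pop_share'].sum():.0f}")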